    Survey on 5G Second Phase RAN Architectures and Functional Splits

    The Radio Access Network (RAN) architecture evolves with each generation of mobile communication technology and forms an indispensable component of the mobile network architecture. The main component of the RAN infrastructure is the base station, which comprises a Radio Frequency unit and a baseband unit. The RAN is a collection of base stations connected to the core network to provide coverage through one or more radio access technologies. The advancement towards cloud-native networks has led to centralizing the baseband processing of radio signals. There is a trade-off between the advantages of RAN centralization (energy efficiency and power cost reduction) and the cost of the fronthaul, i.e., the complexity of carrying traffic between the data processing unit and the distributed antennas. 5G networks hold high potential for adopting the centralized architecture to reduce maintenance and deployment costs while improving resilience, reliability, and coordination. Combining virtualization with a centralized RAN architecture makes it possible to meet the overall requirements of both the customer and the Mobile Network Operator. Functional splitting is one of the key enablers for 5G networks: it supports the Centralized RAN, the virtualized RAN, and the more recent Open RAN. This survey provides a comprehensive tutorial on the paradigms of the RAN architecture evolution, its key features, and implementation challenges. It provides a thorough review of the 3rd Generation Partnership Project (3GPP) functional splits, complemented by the associated challenges and potential solutions. The survey also presents an overview of the fronthaul and its requirements, possible implementation solutions, algorithms, and required tools, whilst providing a vision of the evolution beyond the 5G second phase.
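As a quick aside to the splits this survey reviews, the sketch below tabulates the eight 3GPP functional split options from TR 38.801 as a plain Python mapping. The layer boundaries are standard, but the table itself is only an illustrative summary, not material from the survey.

```python
# Illustrative summary of the 3GPP functional split options (TR 38.801).
# Everything above the split runs in the central unit; everything below
# stays in the distributed unit. Lower splits centralize more processing
# but demand more fronthaul bandwidth and tighter latency.
FUNCTIONAL_SPLITS = {
    1: "RRC / PDCP",
    2: "PDCP / high RLC",     # basis of the 3GPP CU/DU (F1) split
    3: "high RLC / low RLC",
    4: "RLC / MAC",
    5: "high MAC / low MAC",
    6: "MAC / PHY",
    7: "high PHY / low PHY",  # basis of the O-RAN 7-2x fronthaul split
    8: "PHY / RF",            # classic CPRI-style C-RAN
}

for option, boundary in FUNCTIONAL_SPLITS.items():
    print(f"Option {option}: split at {boundary}")
```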

    Enhancements of Positioning for IoT Devices

    The aim of this thesis work is to find novel methods that enhance the performance of the existing Observed Time Difference of Arrival (OTDOA) positioning technique for Internet of Things (IoT) devices, introduced in 3rd Generation Partnership Project (3GPP) Release 14. Narrowband IoT (NB-IoT) positioning is considered the baseline. The scope includes an investigation of the positioning reference signal (PRS), covering modification or replacement of the current sequence generation through mathematical derivations as well as changes to the PRS configuration (e.g., increased density and new time/frequency resource grids). Moreover, different correlator designs at the receiver side (operating in either the time or the frequency domain) are analyzed in terms of performance and complexity. In addition, we investigate the impact of extending the PRS transmission either in the time domain, by sending multiple subframes, or in the frequency domain, by utilizing more spectrum resources (more Physical Resource Blocks (PRBs)). Our results show that the current NB-IoT PRS resource mapping is already well designed. The newly suggested sequences and time/frequency resource grids influence the correlation properties, which in turn affect the positioning accuracy. Increasing the number of resources for PRS transmission in time or frequency improves the positioning accuracy. Different PRS sequences can lead to performance improvements at the cost of implementation complexity and design flexibility. In addition, optimizing the original Gold sequence shows a consistently good performance gain.

    The Internet of Things (IoT) is a technology that allows the connectivity of smart devices and items to the Internet. IoT is a rapidly growing topic, since it has been estimated that billions of devices will be connected to the Internet by 2025. The strong growth of the IoT market will trigger a revolution in many industries, such as healthcare, agriculture, automotive, and safety and security; our daily life will thus be significantly affected by the development of this technology. However, IoT will allow endless connections to take place, which opens the door to many challenges, such as strict requirements on device power consumption, cost, and complexity. Due to its wide range of applications, many new standards have emerged to support IoT integration. Narrowband IoT (NB-IoT) is one of the newest cellular technologies; it connects many low-cost devices in severe coverage situations with lower power consumption and longer battery lifetime. In 3GPP Release 14, a positioning feature was introduced to NB-IoT that allows device locations to be determined using the positioning reference signal (PRS) transmitted by the base stations. This positioning technique is known as Observed Time Difference of Arrival (OTDOA): the device measures the time of arrival of signals from multiple base stations, and its position is estimated from the differences between these arrival times. Nevertheless, implementing OTDOA in NB-IoT devices is quite challenging due to the low-cost constraint and the strict power consumption requirements. Furthermore, extreme coverage conditions are expected for NB-IoT devices, which degrade the received signal levels and, in turn, the positioning accuracy.
In this thesis, an OTDOA positioning simulation platform has been built. Several approaches have been implemented at the transmitter side with the objective of improving the positioning accuracy. The methods include generating various types of sequences, changing the original sequence mapping, and proposing new designs of the time-frequency resource grid. It is shown that the current standard is well designed and has good positioning performance. Many ideas have been implemented within the scope of this thesis, leading to adequate accuracy under certain circumstances with acceptable complexity. For future work, further investigation of new sequences to replace those of the current standard can be conducted; in addition, advanced low-power receiver algorithms can be considered.
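To make the OTDOA principle concrete, here is a minimal, self-contained sketch of TDOA multilateration: given base-station coordinates and arrival-time differences, a Gauss-Newton iteration recovers the device position. This is a generic textbook formulation, not code from the thesis; the function name and the toy geometry are illustrative.

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def tdoa_position(anchors, tdoas, x0=None, iters=20):
    """Estimate a 2-D position from TDOA measurements.

    anchors : (N, 2) base-station coordinates [m]
    tdoas   : (N-1,) arrival-time differences relative to anchors[0] [s]
    Returns the (x, y) estimate via Gauss-Newton on the range differences.
    """
    anchors = np.asarray(anchors, float)
    rd_meas = C * np.asarray(tdoas, float)       # measured range differences [m]
    x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)  # distance to each base station
        r = (d[1:] - d[0]) - rd_meas             # residuals of the TDOA model
        u = (x - anchors) / d[:, None]           # unit vectors, BS -> device
        J = u[1:] - u[0]                         # Jacobian of (d_i - d_0)
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Toy check: three base stations, noiseless arrival times.
bs = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
true_pos = np.array([400.0, 250.0])
toas = np.linalg.norm(np.array(bs) - true_pos, axis=1) / C
print(tdoa_position(bs, toas[1:] - toas[0]))  # ~ [400, 250]
```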

    Modified Gold Sequence for Positioning Enhancement in NB-IoT

    Positioning is an essential feature in Narrowband Internet of Things (NB-IoT) systems. Observed Time Difference of Arrival is one of the supported positioning techniques for NB-IoT. It utilizes the downlink NB positioning reference signal (NPRS), generated from a length-31 Gold sequence. Although a Gold sequence has good auto-correlation and cross-correlation properties, the correlation properties of the NPRS in NB-IoT are still sub-optimal. This is mainly due to two facts: the number of NPRS symbols in each subframe is limited, and the sampling rate is low. In this paper, we propose to modify the NPRS generation by exploiting the cross-correlation function of the NPRS. That is, for each pair of NPRS orthogonal frequency division multiplexing (OFDM) symbols, the first is generated as specified in the current standard, i.e., from a Gold sequence, while the second is set to the additive inverse of the first. Our simulation results show that the proposed NPRS sequence improves the correlation properties, particularly the cross-correlation. Furthermore, 15%-30% improvements in positioning accuracy can be attained with the proposed method compared to the legacy sequence, under both Additive White Gaussian Noise and Extended-Pedestrian-A channels. The proposed NPRS sequence can also be applied to other similar systems, such as Long-Term Evolution (LTE).
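For readers unfamiliar with the underlying sequence, the sketch below generates the length-31 Gold sequence (the pseudo-random generator of 3GPP TS 36.211, Section 7.2), maps it to QPSK NPRS symbols, and then applies the paper's modification of negating the second symbol of each pair. The `c_init` value and sequence length are placeholders; the standard derives `c_init` from the slot, symbol, and cell identity.

```python
import numpy as np

NC = 1600  # fast-forward offset defined in 3GPP TS 36.211

def gold_sequence(c_init, length):
    """Length-31 Gold sequence c(n) per 3GPP TS 36.211, Sec. 7.2."""
    x1 = np.zeros(NC + length + 31, dtype=np.int8)
    x2 = np.zeros(NC + length + 31, dtype=np.int8)
    x1[0] = 1                                        # fixed x1 initialization
    x2[:31] = [(c_init >> i) & 1 for i in range(31)]  # x2 seeded from c_init
    for n in range(NC + length):
        x1[n + 31] = (x1[n + 3] + x1[n]) % 2
        x2[n + 31] = (x2[n + 3] + x2[n + 2] + x2[n + 1] + x2[n]) % 2
    return (x1[NC:NC + length] + x2[NC:NC + length]) % 2

def qpsk(c):
    """Map the bit sequence to QPSK reference-signal symbols."""
    return ((1 - 2 * c[0::2]) + 1j * (1 - 2 * c[1::2])) / np.sqrt(2)

# First NPRS OFDM symbol of a pair: standard Gold-sequence-based QPSK.
s1 = qpsk(gold_sequence(c_init=0x12345, length=4))  # placeholder parameters
# Proposed modification: second symbol is the additive inverse of the first.
s2 = -s1
print(s1, s2)
```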

    Performance bounds with precoding matrices compliant with standardized 5G-NR for MIMO transmission

    Proceedings of: IEEE Conference on Standards for Communications and Networking (CSCN 2022), 28-30 November 2022, Thessaloniki, Greece.

    Advanced multiple-input multiple-output (MIMO) beamforming techniques are crucial in 5G New Radio (NR) to achieve the expected data rates. Therefore, the 3rd Generation Partnership Project (3GPP) has proposed a codebook-based MIMO precoding strategy to provide high diversity, array gain, and spatial multiplexing; the main goal is a trade-off between performance, signal overhead, and complexity. The precoding matrix is selected from a set of predefined codebooks based on the knowledge that the 5G-NR base station (gNB) acquires about the channel state. In this work, a detailed study of the precoding matrix design is provided, following the guidelines of 3GPP Technical Specifications TS 38.211 and TS 38.214. The performance, in terms of spectral efficiency (SE), achieved by the 5G-NR precoding matrices is analyzed for a single-user MIMO scenario. These results are contrasted against the optimal singular value decomposition (SVD) solution in order to explore the gap between the standardized precoding proposal and the optimal one. Several values of signal-to-noise ratio (SNR) and different antenna array configurations are considered. Moreover, the multiplexing gain for different numbers of parallel data streams is evaluated. Numerical results show the SE bounds that can be obtained with the 5G-NR precoding matrices. These insights are of key importance for the practical implementation of precoding strategies in 5G-NR systems and beyond.
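As a rough illustration of the SVD baseline the paper compares against, the snippet below computes the SE of SVD precoding with equal power split over the active streams; the optimal benchmark would additionally apply water-filling power allocation. The channel, SNR, and stream count are toy values, not the paper's simulation setup.

```python
import numpy as np

def svd_precoding_se(H, snr_lin, n_streams):
    """Spectral efficiency [bit/s/Hz] of SVD precoding with equal power
    split over n_streams, for a channel H of shape (n_rx, n_tx) and
    unit-variance noise."""
    s = np.linalg.svd(H, compute_uv=False)[:n_streams]  # strongest modes
    return np.sum(np.log2(1 + (snr_lin / n_streams) * s**2))

# Toy 4x4 Rayleigh channel at 10 dB SNR, two spatial streams.
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print(svd_precoding_se(H, snr_lin=10.0, n_streams=2))
```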

    Spectral Efficiency of Precoded 5G-NR in Single and Multi-User Scenarios under Imperfect Channel Knowledge: A Comprehensive Guide for Implementation

    Digital precoding techniques have been widely applied in multiple-input multiple-output (MIMO) systems to enhance spectral efficiency (SE), which is crucial in 5G New Radio (NR). Therefore, the 3rd Generation Partnership Project (3GPP) has developed codebook-based MIMO precoding strategies to achieve a good trade-off between performance, complexity, and signal overhead. This paper aims to evaluate the performance bounds in SE achieved by the 5G-NR precoding matrices in single-user (SU) and multi-user (MU) MIMO systems, namely Type I and Type II, respectively. The implementation of these codebooks is covered, providing a comprehensive guide with a detailed analysis. The performance of the 5G-NR precoder is compared with theoretical precoding techniques, such as singular value decomposition (SVD) and block diagonalization, to quantify the margin of improvement of the standardized methods. Several configurations of antenna arrays, numbers of antenna ports, and parallel data streams are considered in the simulations. Moreover, the effect of channel estimation errors on system performance is analyzed in both the SU- and MU-MIMO cases. For a realistic framework, the SE values are obtained for a practical deployment based on a clustered delay line (CDL) channel model. These results provide valuable insights for system designers about the implementation and performance of the 5G-NR precoding matrices. This work has received funding from the European Union (EU) Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie ETN TeamUp5G, grant agreement No. 813391. It has also been partially funded by the Spanish national project IRENE-EARTH (PID2020-115323RB-C33/AEI/10.13039/501100011033).
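To illustrate the block-diagonalization baseline mentioned above, here is a minimal MU-MIMO sketch: each user's precoder is constrained to the null space of the other users' channels, so inter-user interference is removed by construction. It is a generic textbook construction under idealized assumptions (perfect CSI, enough transmit antennas), not the paper's implementation.

```python
import numpy as np

def block_diagonalization(H_list):
    """Block-diagonalization precoders for MU-MIMO downlink.

    H_list: per-user channels, each of shape (n_rx_k, n_tx).
    Returns one precoder per user whose columns lie in the null space
    of every other user's channel (zero inter-user leakage).
    """
    precoders = []
    for k, Hk in enumerate(H_list):
        # Stack the other users' channels and find their null space.
        H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
        _, _, Vh = np.linalg.svd(H_others)
        null = Vh.conj().T[:, np.linalg.matrix_rank(H_others):]
        # Inside that null space, align with user k's strongest directions.
        _, _, Vh_eff = np.linalg.svd(Hk @ null)
        precoders.append(null @ Vh_eff.conj().T[:, :Hk.shape[0]])
    return precoders

# Toy check: two 2-antenna users, 4 TX antennas; leakage should be ~0.
rng = np.random.default_rng(1)
users = [rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
         for _ in range(2)]
V = block_diagonalization(users)
print(np.linalg.norm(users[0] @ V[1]))  # ~1e-15: user 0 sees no user-1 signal
```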